1 - 20 of 86
1.
PLoS One ; 17(2): e0262098, 2022.
Article En | MEDLINE | ID: mdl-35213558

Longstanding cross-linguistic work on event representations in spoken languages has argued for a robust mapping between an event's underlying representation and its syntactic encoding, such that, for example, the agent of an event is most frequently mapped to subject position. In the same vein, sign languages have long been claimed to construct signs that visually represent their meaning, i.e., signs that are iconic. Experimental research on linguistic parameters such as plurality and aspect has recently shown some of them to be visually universal in sign, i.e., recognized by non-signers as well as signers, and has identified specific visual cues that achieve this mapping. However, little is known about what makes action representations in sign language iconic, or whether and how the mapping of underlying event representations to syntactic encoding is visually apparent in the form of a verb sign. To this end, we asked what visual cues non-signers may use in evaluating transitivity (i.e., the number of entities involved in an action). To do this, we correlated non-signer judgments about the transitivity of verb signs from American Sign Language (ASL) with phonological characteristics of these signs. We found that non-signers did not accurately guess the transitivity of the signs, but that non-signer transitivity judgments can nevertheless be predicted from the signs' visual characteristics. Further, non-signers cue in on just those features that code event representations across sign languages, despite interpreting them differently. This suggests the existence of visual biases that underlie detection of linguistic categories, such as transitivity, which may uncouple from underlying conceptual representations over time in mature sign languages due to lexicalization processes.


Deafness/prevention & control , Linguistics/trends , Sign Language , Vision, Ocular/physiology , Deafness/physiopathology , Female , Fingers/physiology , Hand/physiology , Humans , Judgment , Male , Thumb/physiology
2.
Proc Natl Acad Sci U S A ; 118(51)2021 12 21.
Article En | MEDLINE | ID: mdl-34916287

The surge of post-truth political argumentation suggests that we are living in a special historical period when it comes to the balance between emotion and reasoning. To explore whether this is indeed the case, we analyze language in millions of books covering the period from 1850 to 2019 represented in Google nGram data. We show that the use of words associated with rationality, such as "determine" and "conclusion," rose systematically after 1850, while words related to human experience such as "feel" and "believe" declined. This pattern reversed over the past decades, paralleled by a shift from a collectivistic to an individualistic focus as reflected, among other things, by the ratio of singular to plural pronouns such as "I"/"we" and "he"/"they." Interpreting this synchronous sea change in book language remains challenging. However, as we show, this reversal occurs in fiction as well as nonfiction. Moreover, the pattern of change in the ratio between sentiment and rationality flag words since 1850 also occurs in New York Times articles, suggesting that it is not an artifact of the book corpora we analyzed. Finally, we show that word trends in books parallel trends in corresponding Google search terms, supporting the idea that changes in book language do in part reflect changes in interest. All in all, our results suggest that over the past decades, there has been a marked shift in public interest from the collective to the individual, and from rationality toward emotion.
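A minimal sketch of the kind of pronoun-frequency ratio described above, assuming Ngram-style rows of (word, year, match count); the file name, column layout, and pronoun sets are illustrative assumptions, not the authors' pipeline.

```python
# Illustrative only: yearly ratio of singular to plural personal pronouns
# from Ngram-style count data. "pronoun_counts.csv" and its (ngram, year,
# match_count) layout are hypothetical.
import csv
from collections import defaultdict

SINGULAR = {"i", "he", "she"}
PLURAL = {"we", "they"}

def pronoun_ratio(path):
    """Return {year: singular_count / plural_count}."""
    singular, plural = defaultdict(int), defaultdict(int)
    with open(path, newline="") as f:
        for ngram, year, count in csv.reader(f):
            word = ngram.lower()
            if word in SINGULAR:
                singular[int(year)] += int(count)
            elif word in PLURAL:
                plural[int(year)] += int(count)
    return {y: singular[y] / plural[y] for y in sorted(singular) if plural[y]}
```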


Language , Books/history , Emotions , History, 19th Century , History, 20th Century , History, 21st Century , Humans , Individuality , Language/history , Libraries, Digital/statistics & numerical data , Linguistics/history , Linguistics/trends , Newspapers as Topic/history , Newspapers as Topic/trends , Principal Component Analysis
3.
J Couns Psychol ; 68(1): 77-87, 2021 Jan.
Article En | MEDLINE | ID: mdl-32352823

Raw linguistic data within psychotherapy sessions may provide important information about clients' progress and well-being. In the current study, computerized text analytic techniques were applied to examine whether linguistic features were associated with clients' experiences of distress within and between clients and whether changes in linguistic features were associated with changes in treatment outcome. Transcripts of 729 psychotherapy sessions from 58 clients treated by 52 therapists were analyzed. Prior to each session, clients reported their distress level. Linguistic features were extracted automatically using a natural language parser for first-person singular identification and positive and negative emotion word lexicons. The association between linguistic features and levels of distress was examined using multilevel models. At the within-client level, fewer first-person singular words, fewer negative emotion words, and more positive emotion words were associated with lower distress in the same session; and fewer negative emotion words were associated with lower next-session distress (rather small effect sizes, 0.011 < f² < 0.022). At the between-client level, only first-session use of positive emotion words was associated with first-session distress (ηp² effect size = 0.08). A drop in the use of first-person singular words was associated with improved outcome from pre- to posttreatment (small ηp² effect size = 0.05). The findings provide preliminary support for the association between clients' linguistic features and their fluctuating experience of distress. They point to the potential value of computerized linguistic measures to track therapeutic outcomes. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
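As an illustration only (not the study's actual pipeline), the per-session feature extraction could look roughly like the sketch below; the tiny pronoun and emotion word lists are stand-ins for the full lexicons and are assumptions here.

```python
# Illustrative sketch: relative frequencies of first-person singular pronouns
# and positive/negative emotion words in a session transcript. The word lists
# are toy stand-ins for the lexicons used in the study.
import re

FIRST_PERSON_SG = {"i", "me", "my", "mine", "myself"}
POSITIVE = {"happy", "hope", "calm", "relieved"}
NEGATIVE = {"sad", "afraid", "angry", "hopeless"}

def session_features(transcript: str) -> dict:
    tokens = re.findall(r"[a-z']+", transcript.lower())
    n = len(tokens) or 1  # avoid division by zero on empty transcripts
    return {
        "first_person_sg": sum(t in FIRST_PERSON_SG for t in tokens) / n,
        "positive_emotion": sum(t in POSITIVE for t in tokens) / n,
        "negative_emotion": sum(t in NEGATIVE for t in tokens) / n,
    }

print(session_features("I feel hopeless and I am afraid of what comes next."))
```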


Data Analysis , Linguistics/methods , Professional-Patient Relations , Psychological Distress , Psychotherapy/methods , Adult , Aged , Databases, Factual , Emotions/physiology , Female , Humans , Linguistics/trends , Male , Middle Aged , Psychotherapy/trends , Treatment Outcome , Young Adult
5.
PLoS One ; 15(7): e0236347, 2020.
Article En | MEDLINE | ID: mdl-32702022

Measuring the semantic similarity between words is important for natural language processing tasks. Traditional models of semantic similarity perform well in most cases, but when dealing with words that involve geographical context, the spatial semantics of the implied spatial information are rarely preserved. Geographic information retrieval (GIR) methods have focused on this issue; however, they sometimes fail to solve the problem because the spatial and textual similarities of words are considered and calculated separately. In this paper, from the perspective of spatial context, we consider the two parts as a whole, spatial context semantics, and propose a method that measures spatial semantic similarity using a sliding geospatial context window for geo-tagged words. The proposed method was first validated with a set of simulated data and then applied to a real-world dataset from Flickr. As a result, a spatial semantic similarity model at different scales is presented. We believe this model is a necessary supplement to traditional textual-language semantic analyses of words obtained by word-embedding technologies. This study has the potential to improve the quality of recommendation systems by considering relevant spatial context semantics, and it benefits linguistic semantic research by emphasising spatial cognition among words.


Language , Linguistics/trends , Natural Language Processing , Semantics , Algorithms , Comprehension , Humans , Information Storage and Retrieval , PubMed
6.
PLoS One ; 15(5): e0232938, 2020.
Article En | MEDLINE | ID: mdl-32459802

Stretched words like 'heellllp' or 'heyyyyy' are a regular feature of spoken language, often used to emphasize or exaggerate the underlying meaning of the root word. While stretched words are rarely found in formal written language and dictionaries, they are prevalent within social media. In this paper, we examine the frequency distributions of 'stretchable words' found in roughly 100 billion tweets authored over an 8-year period. We introduce two central parameters, 'balance' and 'stretch', that capture their main characteristics, and explore their dynamics by creating visual tools we call 'balance plots' and 'spelling trees'. We discuss how the tools and methods we develop here could be used to study the statistical patterns of mistypings and misspellings, and could serve as a basis for other linguistic research involving stretchable words, along with potential applications in augmenting dictionaries, improving language processing, and any area where sequence construction matters, such as genetics.
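To make the notion of stretching concrete, here is a toy sketch that collapses repeated letters and measures how far a token has been stretched; the paper's balance and stretch parameters are more refined, so this is an assumption-laden simplification.

```python
# Toy illustration: collapse runs of repeated letters and measure stretching.
# Note this also collapses legitimate double letters ("hello" -> "helo"),
# which a more careful treatment of stretchable words would avoid.
import re

def collapse(word: str) -> str:
    """Collapse each run of a repeated letter to a single letter: 'heyyyyy' -> 'hey'."""
    return re.sub(r"(.)\1+", r"\1", word)

def stretch(word: str) -> int:
    """Number of extra characters contributed by stretching."""
    return len(word) - len(collapse(word))

for w in ["heellllp", "heyyyyy", "help"]:
    print(w, collapse(w), stretch(w))
```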


Language , Linguistics/statistics & numerical data , Linguistics/trends , Humans , Linguistics/methods , Phonetics , Reading , Social Media/statistics & numerical data , Social Media/trends
7.
PLoS One ; 15(3): e0230393, 2020.
Article En | MEDLINE | ID: mdl-32208426

The novel finding of Balakrishnan, Miller & Shankar (2008) that investors, overwhelmed by the plethora of stock investment offerings, limit their analysis and daily choices to only a small subset of stocks (i.e., herding behavior) now seems to be common wisdom (Iosebashvili, 2019). We investigate whether the introduction of an innovation in financial products designed to allow investors to trade the entire product bundle of S&P 500 stocks, namely S&P 500 index funds, altered "herding behavior" by creating a new class of index investors. We model the distribution of daily trading concentration as a power law function and examine changes over the last six decades. Intriguingly, we discover a unique pattern in the trading concentration distribution that exhibits two distinct trends. For the period 1960-75, the trading concentration of the S&P 500 stocks tracks the increasing trend for the entire market, i.e., the unevenness in trading steadily increased. However, after the introduction of S&P 500 index funds in 1975, concentration of trading in the S&P 500 stocks has steadily decreased, i.e., the trading distribution has become more even across all 500 stocks, contrary to the current belief of equity analysts. This is also in sharp contrast to the case of U.S. stocks that are not in the S&P 500 index, where trading concentration has steadily increased. We further corroborate the uniqueness of the inverted V-shape by a counterfactual investigation of the trading concentration patterns for other sets of 500-stock portfolios. This distinctive trading concentration pattern for S&P 500 stocks appears to be driven by the increasing dominance of bundle trading by index investors.
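One simple way to quantify such a concentration pattern (a sketch under our own assumptions, not the authors' estimator) is to fit the slope of a log-log rank-share relation for each trading day:

```python
# Illustrative sketch: estimate a power-law exponent for one day's trading
# concentration by ordinary least squares on log(volume share) vs. log(rank).
# `volumes` (per-stock daily trading volumes) is hypothetical input.
import numpy as np

def power_law_exponent(volumes):
    shares = np.sort(np.asarray(volumes, dtype=float))[::-1]
    shares = shares / shares.sum()                 # volume shares, largest first
    ranks = np.arange(1, len(shares) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(shares), 1)
    return slope                                   # more negative = more concentrated trading

rng = np.random.default_rng(0)
print(power_law_exponent(rng.pareto(1.5, size=500) + 1.0))
```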


Economics , Investments/economics , Models, Economic , Social Behavior , Decision Making , Financial Management/economics , Humans , Linear Models , Linguistics/trends
8.
PLoS Biol ; 17(11): e3000389, 2019 11.
Article En | MEDLINE | ID: mdl-31774810

Recently, prominent theoretical linguists have argued for an explicit scenario for the evolution of the human language capacity on the basis of its computational properties. Concretely, the simplicity of a minimalist formulation of the operation Merge, which allows humans to recursively compute hierarchical relations in language, has been used to promote a sudden-emergence, single-mutation scenario. In support of this view, Merge is said to be either fully present or fully absent: one cannot have half-Merge. On this basis, it is inferred that the emergence of our fully fledged language capacity had to be sudden. Thus, proponents of this view draw a parallelism between the formal complexity of the operation at the computational level and the number of evolutionary steps it must imply. Here, we examine this argument in detail and show that the jump from the atomicity of Merge to a single-mutation scenario is not valid and therefore cannot be used as justification for a theory of language evolution along those lines.


Linguistics/classification , Linguistics/trends , Biological Evolution , Humans , Language
9.
Acta Psychol (Amst) ; 199: 102903, 2019 Aug.
Article En | MEDLINE | ID: mdl-31470173

Dyslexia is often characterized by disordered word recognition and spelling, though dysfunction on various non-linguistic tasks suggests a more pervasive deficit may underlie reading and spelling abilities. The serial-order learning impairment in dyslexia (SOLID) hypothesis proposes that sequence learning impairments fundamentally disrupt cognitive abilities, including linguistic processes, among individuals with dyslexia; yet only some studies report sequence learning deficits in people with dyslexia relative to controls. Evidence may be mixed because traditional sequence learning tasks often require strong motor demands, working memory processes and/or executive functions, wherein people with dyslexia can show impairments. Thus, observed sequence learning deficits in dyslexia may only appear to the extent that comorbid motor-based processes, memory capacity, or executive processes are involved. The present study measured sequence learning in college-aged students with and without dyslexia using a single task that evaluates sequencing and non-sequencing components but without strong motor, executive, or memory demands. During sequencing, each additional link in a sequence of stimuli leading to a reward is trained step-by-step, until a complete sequence is acquired. People with dyslexia made significantly more sequencing errors than controls, despite equivalent performance on non-sequencing components. Mediation analyses further revealed that sequence learning accounted for a large portion of the variance between dyslexia status and linguistic abilities, particularly pseudo-word reading. These findings extend the SOLID hypothesis by showing difficulties in the ability to acquire sequences that may play an underlying role in literacy acquisition.


Aptitude/physiology , Dyslexia/psychology , Linguistics/trends , Reading , Serial Learning/physiology , Adult , Dyslexia/diagnosis , Dyslexia/physiopathology , Executive Function/physiology , Female , Forecasting , Humans , Male , Memory/physiology , Psychomotor Performance/physiology , Young Adult
10.
Dev Cogn Neurosci ; 39: 100672, 2019 10.
Article En | MEDLINE | ID: mdl-31430627

Hearing in noisy environments is a complicated task that engages attention, memory, linguistic knowledge, and precise auditory-neurophysiological processing of sound. Accumulating evidence in school-aged children and adults suggests these mechanisms vary with the task's demands. For instance, co-located speech and noise imposes a large cognitive load and recruits working memory, while spatially separating speech and noise diminishes this load and draws on alternative skills. Past research has focused on one or two mechanisms underlying speech-in-noise perception in isolation; few studies have considered multiple factors in tandem, or how they interact during critical developmental years. This project sought to test complementary hypotheses involving neurophysiological, cognitive, and linguistic processes supporting speech-in-noise perception in young children under different masking conditions (co-located, spatially separated). Structural equation modeling was used to identify latent constructs and examine their contributions as predictors. Results reveal that cognitive and language skills operate as a single factor supporting speech-in-noise perception under different masking conditions. While neural coding of the F0 supports perception in both co-located and spatially separated conditions, neural timing predicts perception in spatially separated listening exclusively. Together, these results suggest co-located and spatially separated speech-in-noise perception draw on similar cognitive/linguistic skills, but distinct neural factors, in early childhood.


Cognition/physiology , Language Development , Linguistics , Noise , Speech Perception/physiology , Auditory Perception/physiology , Child , Child, Preschool , Female , Forecasting , Hearing/physiology , Humans , Linguistics/trends , Male , Neurophysiology
11.
J Alzheimers Dis ; 71(2): 377-388, 2019.
Article En | MEDLINE | ID: mdl-31381516

Despite the large number of elderly bilinguals at risk for Alzheimer's disease (AD) and dementia worldwide, significant questions remain about the relationship between speaking more than one language and later cognitive decline. Bilingualism may affect cognitive and neural reserve, the time of onset of dementia symptoms and neuropathology, and linguistic competency in dementia. This review indicates increased cognitive reserve from executive (monitoring, selecting, inhibiting) control of two languages and increased neural reserve involving left frontal and related areas for language control. Many, but not all, studies indicate a delay in dementia symptom onset but worse hippocampal and mesiotemporal atrophy among bilinguals versus monolinguals with AD. In contrast, bilinguals do worse on language measures, and bilinguals with AD or dementia have difficulty maintaining and monitoring their second language. Together, these studies suggest that early-acquired and proficient bilingualism increases reserve through frontal-predominant executive control, and these executive abilities compensate for early dementia symptoms, delaying their onset but not the neuropathology of the disease. Finally, as executive control decreases further with advancing dementia, there is increasing difficulty inhibiting the dominant first language and staying in the second language. These conclusions must be interpreted with caution, given the problems inherent in this type of research; however, they do recommend more work on the pre-dementia neuroprotective effects and the dementia-related language impairments of bilingualism.


Cognitive Reserve/physiology , Dementia/psychology , Executive Function/physiology , Linguistics , Multilingualism , Dementia/diagnosis , Humans , Language , Linguistics/trends
12.
J Alzheimers Dis ; 70(4): 1163-1174, 2019.
Article En | MEDLINE | ID: mdl-31322577

BACKGROUND: Recently, many studies have been carried out to detect Alzheimer's disease (AD) from continuous speech by linguistic analysis and modeling. However, few of them utilize language models (LMs) to extract linguistic features and to investigate the lexical-level differences between AD and healthy speech. OBJECTIVE: Our goals include obtaining state-of-the-art performance in automatic AD detection, emphasizing N-gram LMs as powerful tools for distinguishing AD patients' narratives from those of healthy controls, and discovering differences in lexical usage between AD patients and healthy people. METHOD: We utilize a subset of the DementiaBank corpus, including 242 control samples from 99 control participants and 256 AD samples from 169 "PossibleAD" or "ProbableAD" participants. Baseline models are built through area-under-the-curve-based feature selection and five machine learning algorithms for comparison. Perplexity features are extracted using LMs to build enhanced detection models. Finally, the differences in lexical usage between AD patients and healthy people are investigated by a proportion test based on unigram probabilities. RESULTS: Our baseline model obtains a detection accuracy of 80.7%. This accuracy increases to 85.4% after integrating the perplexity features derived from LMs. Further investigations show that AD patients tend to use more general, less informative, and less accurate words to describe characters and actions than healthy controls. CONCLUSION: The perplexity features extracted by LMs can benefit automatic AD detection from continuous speech. There exist lexical-level differences between AD and healthy speech that can be captured by statistical N-gram LMs.
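As a rough sketch of what a perplexity feature could look like (assuming an add-one-smoothed bigram LM trained on control transcripts; the paper's actual N-gram LMs and DementiaBank preprocessing are more involved):

```python
# Illustrative add-one-smoothed bigram LM and per-sample perplexity feature.
import math
from collections import Counter

def train_bigram(sentences):
    unigrams, bigrams = Counter(), Counter()
    for tokens in sentences:
        padded = ["<s>"] + tokens + ["</s>"]
        unigrams.update(padded)
        bigrams.update(zip(padded, padded[1:]))
    return unigrams, bigrams

def perplexity(tokens, unigrams, bigrams):
    padded = ["<s>"] + tokens + ["</s>"]
    vocab_size = len(unigrams)
    log_prob = 0.0
    for prev, word in zip(padded, padded[1:]):
        p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)  # add-one smoothing
        log_prob += math.log(p)
    return math.exp(-log_prob / (len(padded) - 1))

# Train on (toy) control narratives, then score a new sample.
control = [["the", "boy", "takes", "a", "cookie"], ["the", "sink", "overflows"]]
uni, bi = train_bigram(control)
print(perplexity(["the", "boy", "takes", "a", "cookie"], uni, bi))
```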


Alzheimer Disease/diagnosis , Alzheimer Disease/physiopathology , Language , Machine Learning , Narration , Speech/physiology , Aged , Aged, 80 and over , Alzheimer Disease/psychology , Female , Humans , Linguistics/trends , Male , Middle Aged , Neuropsychological Tests , Photic Stimulation/methods
13.
J Biosci ; 44(1)2019 Mar.
Article En | MEDLINE | ID: mdl-30837376

Associating human genetic makeup with the faculty of language has long been a goal of biolinguistics. This has stimulated the idea that language can be attributed to genes and that language disabilities are caused by genetic mutations. However, the application of genetic knowledge to language intervention remains a gap in the existing literature. In an effort to bridge this gap, this article presents an account of the genetic and neural associations of language and synthesizes the genetic, neural, epigenetic, and environmental facets involved in language. In addition to describing the association of genes with language, the neural and epigenetic aspects of language are also explored. Further, environmental aspects of language such as language input, emotion, and cognition are also traced back to gene expression. Therefore, effective intervention for language learning difficulties must offer genetics-informed solutions, both linguistic and medical.


Epigenesis, Genetic/genetics , Language Disorders/genetics , Language , Linguistics/trends , Brain/physiology , Cognition/physiology , Gene-Environment Interaction , Humans , Learning/physiology
14.
Psychol Sci ; 30(3): 455-466, 2019 03.
Article En | MEDLINE | ID: mdl-30721119

The roots of gender disparities in science achievement take hold in early childhood. The present studies aimed to identify a modifiable feature of young children's environments that could be targeted to reduce gender differences in science behavior. Four experimental studies with children (N = 501) revealed that describing science in terms of actions ("Let's do science! Doing science means exploring the world!") instead of identities ("Let's be scientists! Scientists explore the world!") increased girls' subsequent persistence in new science games designed to illustrate the scientific method. These studies thus identified subtle but powerful linguistic cues that could be targeted to help reduce gender disparities in science engagement in early childhood.


Cognition/physiology , Linguistics/trends , Motivation/physiology , Science/education , Students/psychology , Achievement , Child , Child, Preschool , Cues , Female , Humans , Male , Sex Factors , Students/statistics & numerical data
15.
Psychol Aging ; 34(1): 43-55, 2019 Feb.
Article En | MEDLINE | ID: mdl-30284854

Verbal working memory-intensive sentence processing declines with age. This might reflect older adults' difficulties with reducing the memory load by grouping single words into multiword chunks. Here we used a serial order task emphasizing syntactic and semantic relations. We evaluated the extent to which older compared with younger adults may differentially use linguistic constraints during sentence processing to cope with verbal working memory limitations. Probing syntactic-semantic interactions, age differences were hypothesized to be confined to the use of syntactic constraints and to be accompanied by an increased reliance on semantic information. Two experiments varying in verbal working memory demands were conducted: the sequence length was increased from eight items in Experiment 1 to 11 items in Experiment 2. We found the use of syntactic constraints to be compromised with aging, while the benefit of semantic information for sentence processing was comparable across age groups. Hence, we suggest that semantic information processing may become relatively more important for successful sentence processing with advancing adult age, possibly inducing a syntactic-to-semantic-processing strategy shift. (PsycINFO Database Record (c) 2019 APA, all rights reserved).


Aging/physiology , Aging/psychology , Linguistics/trends , Memory, Short-Term/physiology , Reading , Semantics , Adult , Aged , Cognition/physiology , Female , Humans , Language , Male , Middle Aged , Photic Stimulation/methods , Young Adult
17.
Nat Hum Behav ; 2(11): 816-821, 2018 11.
Article En | MEDLINE | ID: mdl-31558817

There are more than 7,000 languages spoken in the world today [1]. It has been argued that the natural and social environment of languages drives this diversity [2-13]. However, a fundamental question is how strong environmental pressures are, and whether neutral drift suffices as a mechanism to explain diversification. We estimate the phylogenetic signals of geographic dimensions, distance to water, climate and population size on more than 6,000 phylogenetic trees of 46 language families. Phylogenetic signals of environmental factors are generally stronger than expected under the null hypothesis of no relationship with the shape of family trees. Importantly, they are also, in most cases, not compatible with neutral drift models of constant-rate change across the family tree branches. Our results suggest that language diversification is driven by further adaptive and non-adaptive pressures. Language diversity cannot be understood without modelling the pressures that physical, ecological and social factors exert on language users in different environments across the globe.


Environment , Language , Phylogeography/methods , Humans , Linguistics/trends , Phylogeny , Sociobiology/methods
18.
Infant Behav Dev ; 50: 52-63, 2018 02.
Article En | MEDLINE | ID: mdl-29131969

The effects of communicative functions (CFs) and mind-mindedness (MM) on children's language development have typically been investigated in separate studies. The present longitudinal research was therefore designed to yield new insight into the simultaneous impact of these two dimensions of maternal responsiveness on the acquisition of expressive language skills in a sample of 25 mother-child dyads. The frequencies of five communicative functions (Tutorial, Didactic, Conversational, Control and Asynchronous) and two types of mind-related comments (attuned vs. non-attuned) were assessed from a 15-min play session at 16 months. Children's expressive language was examined at both 16 months (number of word types and tokens produced, and number of words attributed to the child in the Questionnaire for Communication and Early Language Development) and 20 months (number of internal and non-internal words attributed to the child in the Italian version of the MacArthur-Bates Communicative Development Inventory). The main finding was that mothers' use of attuned mind-related comments at 16 months predicted internal-state language at 20 months, above and beyond the effects of CFs and children's linguistic ability at 16 months; in addition, mothers' Tutorial function at 16 months marginally predicted non-internal-state language at 20 months, after controlling for MM and children's linguistic ability at 16 months. These results suggest that different expressions of maternal responsiveness influence distinct aspects of children's expressive language in the second year of life, although the effects of MM appear to be more robust.


Child Language , Communication , Language Development , Maternal Behavior/psychology , Mother-Child Relations/psychology , Thinking , Adult , Female , Forecasting , Humans , Infant , Linguistics/trends , Longitudinal Studies , Male , Thinking/physiology
...